Patent abstract:
METHOD OF REPRODUCING STEREOPHONIC SOUND, EQUIPMENT FOR REPRODUCING STEREOPHONIC SOUND, AND COMPUTER-READABLE RECORDING MEDIUM. A method of reproducing stereophonic sound, the method including: acquiring image depth information indicating a distance between at least one object in an image signal and a reference location; acquiring sound depth information indicating a distance between at least one sound object in a sound signal and the reference location based on the image depth information; and providing sound perspective to the at least one sound object based on the sound depth information.
Publication number: BR112012023504B1
Application number: R112012023504-4
Filing date: 2011-03-17
Publication date: 2021-07-13
Inventors: Yong-choon Cho; Sun-min Kim
Applicant: Samsung Electronics Co., Ltd.
Primary IPC:
Patent description:

TECHNICAL FIELD
[0001] The present invention relates to a method and equipment for reproducing stereophonic sound and, more specifically, to a method and equipment for reproducing stereophonic sound that provide perspective to a sound object.
BACKGROUND ART
[0002] Due to the development of imaging technology, a user can see a stereoscopic 3D image. Stereoscopic 3D imaging presents left-viewpoint image data to the left eye and right-viewpoint image data to the right eye, exploiting binocular disparity. Through 3D imaging technology, a user can recognize an object that appears to realistically jump out of a screen or recede toward the back of the screen.
[0003] Furthermore, along with the development of imaging technology, users' interest in sound has increased, and stereophonic sound in particular has developed sharply. In stereophonic sound technology, multiple speakers are placed around a user so that the user can experience localization at different locations and a sense of perspective. However, with this technology, an image object that approaches the user or recedes from the user may not be effectively represented, so the sound effect corresponding to a 3D image may not be provided.
DESCRIPTION OF DRAWINGS
[0004] Figure 1 is a block diagram of an equipment for reproducing stereophonic sound according to an embodiment of the present invention;
[0005] Figure 2 is a block diagram of the sound depth information acquisition unit of Figure 1 according to an embodiment of the present invention;
[0006] Figure 3 is a block diagram of the sound depth information acquisition unit of Figure 1 according to another embodiment of the present invention;
[0007] Figure 4 is a graph illustrating a predetermined function used in determination units to determine a sound depth value, according to an embodiment of the present invention;
[0008] Figure 5 is a block diagram of a perspective provision unit that provides stereophonic sound using a stereo sound signal according to an embodiment of the present invention;
[0009] Figures 6A to 6D illustrate the provision of stereophonic sound in the equipment for reproducing stereophonic sound of Figure 1 according to an embodiment of the present invention;
[00010] Figure 7 is a flowchart illustrating a method of detecting a location of a sound object based on a sound signal according to an embodiment of the present invention;
[00011] Figures 8A to 8D illustrate the detection of a location of a sound object from a sound signal according to an embodiment of the present invention; and
[00012] Figure 9 is a flowchart illustrating a method of reproducing stereophonic sound according to an embodiment of the present invention.
BEST MODE
[00013] The present invention provides a method and equipment for efficiently reproducing stereophonic sound and, in particular, a method and equipment for reproducing stereophonic sound that effectively represent sound that approaches a user or recedes from the user, by providing perspective to a sound object.
[00014] According to an aspect of the present invention, there is provided a method of reproducing stereophonic sound, the method including: acquiring image depth information indicating a distance between at least one object in an image signal and a reference location; acquiring sound depth information indicating a distance between at least one sound object in a sound signal and a reference location based on the image depth information; and providing sound perspective to the at least one sound object based on the sound depth information.
[00015] The acquisition of sound depth information includes acquiring a maximum depth value for each image section that makes up the image signal; and
[00016] acquiring a sound depth value for the at least one sound object based on the maximum depth value.
[00017] The acquisition of the sound depth value includes determining the sound depth value as a minimum value when the maximum depth value is less than a first threshold value, and determining the sound depth value as a maximum value when the maximum depth value is equal to or greater than a second threshold value.
[00018] The acquisition of the sound depth value further includes determining the sound depth value in proportion to the maximum depth value when the maximum depth value is equal to or greater than the first threshold value and less than the second threshold value.
[00019] The acquisition of the sound depth information includes acquiring location information about at least one image object in the image signal and location information about the at least one sound object in the sound signal; determining whether the location of the at least one image object matches the location of the at least one sound object; and acquiring the sound depth information based on the determination result.
[00020] The acquisition of the sound depth information includes acquiring an average depth value for each image section that makes up the image signal; and acquiring a sound depth value for the at least one sound object based on the average depth value.
[00021] Sound depth value acquisition includes determining the sound depth value as a minimum value when the average depth value is less than a third threshold value.
[00022] The acquisition of the sound depth value includes determining the sound depth value as a minimum value when a difference between the average depth value in a previous section and the average depth value in a current section is less than a fourth threshold value.
[00023] Providing sound perspective includes controlling the power of the sound object based on sound depth information.
[00024] Providing sound perspective includes controlling a gain and a delay time of a reflection signal, generated as the sound object is reflected, based on the sound depth information.
[00025] Providing sound perspective includes controlling the intensity of a low frequency range component of the sound object based on sound depth information.
[00026] The sound perspective provision includes controlling a difference between a phase of the sound object to be emitted through a first speaker and a phase of the sound object to be emitted through a second speaker.
[00027] The method further includes outputting the sound object, which is provided with the sound perspective, through at least one of a left surround speaker and a right surround speaker, and a left front speaker and a right front speaker.
[00028] The method further includes orienting a sound phase to the outside of the speakers by using the sound signal.
[00029] The acquisition of sound depth information includes determining a sound depth value for the at least one sound object based on a size of each of the at least one image object.
[00030] The acquisition of sound depth information includes determining a sound depth value for the at least one sound object based on the distribution of the at least one image object.
[00031] According to another aspect of the present invention, an equipment for reproducing stereophonic sound is provided, the equipment including an image depth information acquisition unit for acquiring image depth information indicating a distance between at least one object in an image signal and a reference location; a sound depth information acquisition unit for acquiring sound depth information indicating a distance between at least one sound object in a sound signal and a reference location based on the image depth information; and a perspective providing unit for providing sound perspective to the at least one sound object based on the sound depth information.
MODE OF THE INVENTION
[00032] In the following, one or more embodiments of the present invention will be more fully described with reference to the accompanying drawings.
[00033] First, for the convenience of description, the terminologies used herein are briefly defined as follows.
[00034] An image object denotes an object included in an image signal, or a subject such as a person, an animal, a plant, or the like.
[00035] A sound object denotes a sound component included in a sound signal. Various sound objects can be included in one sound signal. For example, a sound signal generated by recording an orchestral performance includes various sound objects generated from various musical instruments such as the guitar, the violin, the oboe, and the like.
[00036] A sound source is an object (e.g., a musical instrument or vocal cords) that generates a sound object. In this specification, both an object that actually generates a sound object and an object that a user recognizes as generating a sound object are referred to as a sound source. For example, when an apple flies toward a user from a screen while the user is watching a movie, a sound (sound object) generated while the apple is moving can be included in the sound signal. The sound object can be obtained by recording a sound actually generated when an apple is thrown, or it can be a previously recorded sound object that is simply played back. However, in either case, the user recognizes the apple as what generated the sound object, and so the apple can be a sound source as defined in this specification.
[00037] Image depth information indicates a distance between a background and a reference location, and a distance between an object and a reference location. The reference location can be a surface of the display device on which an image is output.
[00038] Sound depth information indicates a distance between a sound object and a reference location. More specifically, sound depth information indicates a distance between a location (a location of a sound source) where a sound object is generated and a reference location.
[00039] As described above, when an apple moves toward a user from a screen while the user is watching a movie, the distance between the sound source and the user decreases. To effectively represent the approaching apple, it may be represented that the generation location of the sound object corresponding to the image object gradually approaches the user, and information about this is included in the sound depth information. The reference location may vary depending on a sound source location, a speaker location, a user location, and the like.
[00040] Sound perspective is one of the sensations a user experiences with respect to a sound object. From a sound object, a user recognizes the location where the sound object is generated, that is, the location of the sound source that generates it. Here, the sense of distance between the user and the sound source, as recognized by the user, denotes sound perspective.
[00041] Figure 1 is a block diagram of an equipment 100 for reproducing stereophonic sound according to an embodiment of the present invention.
[00042] The equipment 100 for reproducing stereophonic sound according to the present embodiment of the present invention includes an image depth information acquisition unit 110, a sound depth information acquisition unit 120, and a perspective provision unit 130.
[00043] The image depth information acquisition unit 110 acquires image depth information indicating a distance between at least one image object in an image signal and a reference location. Image depth information can be a depth map indicating depth values of the pixels that make up an image object or background.
[00044] The sound depth information acquisition unit 120 acquires sound depth information, which indicates a distance between a sound object and a reference location, based on the image depth information. There can be various methods of generating the sound depth information using the image depth information; hereinafter, two such methods will be described. However, the present invention is not limited thereto.
[00045] For example, the sound depth information acquisition unit 120 can acquire sound depth values for each sound object. The sound depth information acquisition unit 120 acquires location information about the image objects and location information about the sound object and combines the image objects with the sound objects based on the location information. Then, based on image depth information and combination information, sound depth information can be generated. Such an example will be described in detail with reference to Figure 2.
[00046] As another example, the sound depth information acquisition unit 120 can acquire sound depth values according to the sound sections constituting a sound signal. The sound signal comprises at least one sound section. Here, the sound signal in a section can have the same sound depth value; that is, the same sound depth value can be applied to each different sound object. The sound depth information acquisition unit 120 acquires image depth values for each image section constituting an image signal. The image sections can be obtained by dividing an image signal into frame units or scene units. The sound depth information acquisition unit 120 acquires a representative depth value (e.g., a maximum depth value, a minimum depth value, or an average depth value) in each image section and determines the sound depth value of the sound section that corresponds to the image section by using the representative depth value. Such an example will be described in detail with reference to Figure 3.
[00047] The perspective providing unit 130 processes a sound signal so that a user can perceive sound perspective based on the sound depth information. The perspective providing unit 130 can provide the sound perspective according to each sound object after the sound objects corresponding to the image objects are extracted, provide the sound perspective according to each channel included in a sound signal, or provide the sound perspective to the entire sound signal.
[00048] The perspective provision unit 130 performs at least one of the following four tasks, (i) to (iv), so that a user can effectively perceive sound perspective. However, the four tasks performed in the perspective provision unit 130 are just an example, and the present invention is not limited thereto.
(i) The perspective provision unit 130 adjusts the power of a sound object based on the sound depth information. The closer to a user the sound object is generated, the more the power of the sound object increases.
(ii) The perspective provision unit 130 adjusts a gain and a delay time of a reflection signal based on the sound depth information. A user hears both a direct sound signal that is not reflected by an obstacle and a reflection sound signal generated by being reflected by an obstacle. The reflection sound signal has a lower intensity than the direct sound signal and generally arrives at the user delayed by a predetermined time compared to the direct sound signal. Particularly, when a sound object is generated close to a user, the reflection sound signal arrives considerably later than the direct sound signal, and its intensity is markedly reduced.
(iii) The perspective provision unit 130 adjusts a low-frequency band component of a sound object based on the sound depth information. When the sound object is generated close to a user, the user perceives the low-frequency band component prominently.
(iv) The perspective provision unit 130 adjusts a phase of a sound object based on the sound depth information. As the difference between the phase of the sound object to be output from a first speaker and the phase of the sound object to be output from a second speaker increases, the user recognizes the sound object as being closer.
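Task (i) is the simplest of the four; a minimal sketch follows, and tasks (ii) to (iv) are sketched alongside the corresponding Figure 5 modules further below. This snippet is illustrative only: the [0, 1] depth normalization and the linear gain law are assumptions, not requirements of the patent.

```python
import numpy as np

def control_power(sound_object, depth):
    # Task (i): scale the sound object's amplitude with its sound depth
    # value; depth is assumed normalized to [0, 1], 1 meaning closest
    # to the user, and the linear gain law is purely illustrative.
    gain = 1.0 + depth
    return np.asarray(sound_object, dtype=float) * gain
```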
[00049] Operations of the perspective provision unit 130 will be described in detail with reference to Figure 5.
[00050] Figure 2 is a block diagram of the sound depth information acquisition unit 120 of Figure 1 according to an embodiment of the present invention.
[00051] The sound depth information acquisition unit 120 includes a first location acquisition unit 210, a second location acquisition unit 220, a combination unit 230, and a determination unit 240.
[00052] The first location acquisition unit 210 acquires location information of an image object based on the image depth information. The first location acquisition unit 210 may acquire location information only for an image object in which left-and-right or forward-and-backward movement in an image signal is detected.
[00053] The first location acquisition unit 210 compares depth maps of successive image frames based on Equation 1 below and identifies the coordinates at which the change in depth values is large.

Diff_i^{x,y} = | I_i^{x,y} - I_{i+1}^{x,y} |    (Equation 1)

[00054] In Equation 1, i indicates a frame number and (x, y) indicates coordinates. Consequently, I_i^{x,y} indicates a depth value of the i-th frame at coordinates (x, y).

[00055] After Diff_i^{x,y} is calculated for all coordinates, the first location acquisition unit 210 searches for the coordinates where Diff_i^{x,y} is above a threshold value. The first location acquisition unit 210 determines the image object corresponding to the coordinates where Diff_i^{x,y} is above the threshold as an image object whose motion is detected, and the corresponding coordinates are determined as the location of the image object.
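A minimal sketch of this motion search follows; the threshold value and the use of integer depth maps are assumptions.

```python
import numpy as np

def moving_object_coords(depth_map_i, depth_map_next, threshold=10):
    # Equation 1: per-pixel depth change between adjacent frames.
    diff = np.abs(depth_map_next.astype(int) - depth_map_i.astype(int))
    # Coordinates whose depth change exceeds the threshold are taken as
    # the location of a moving image object.
    ys, xs = np.nonzero(diff > threshold)
    return list(zip(xs.tolist(), ys.tolist()))
```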
[00056] The second location acquisition unit 220 acquires location information about a sound object based on a sound signal. There may be various methods of acquiring location information about the sound object via the second location acquisition unit 220.
[00057] For example, the second location acquisition unit 220 separates a main component and an ambience component from a sound signal, compares the main component with the ambience component, and thereby acquires location information about the sound object. Furthermore, the second location acquisition unit 220 compares the powers of each channel of a sound signal and thus acquires location information about the sound object. In this method, left and right locations of the sound object can be identified.
[00058] As another example, the second location acquisition unit 220 divides a sound signal into a plurality of sections, calculates the power of each frequency band in each section, and determines a common frequency band based on the power of each frequency band. In this specification, the common frequency band denotes a frequency band whose power is above a predetermined threshold value in adjacent sections. For example, frequency bands having power above "A" are selected in a current section and frequency bands having power above "A" are selected in a previous section (or the frequency bands whose powers rank in the top five in the current section are selected in the current section, and the frequency bands whose powers rank in the top five in the previous section are selected in the previous section). Then, the frequency bands that are selected in both the previous section and the current section are determined as the common frequency band.
[00059] Limiting the selection to frequency bands above a threshold value is done to acquire the location of a sound object having large signal intensity. Consequently, the influence of a sound object having small signal intensity is minimized and the influence of a main sound object can be maximized. As the common frequency band is determined, it can be determined whether a new sound object that did not exist in the previous section is generated in the current section, or whether a characteristic (for example, a generation location) of a sound object that existed in the previous section is changed.
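A sketch of this common-band selection, using the power-threshold variant, follows; the sampling rate, the 1,000 Hz band width, and the threshold are assumed values.

```python
import numpy as np

def band_powers(section, sr=48000, band_hz=1000):
    # Power of each band_hz-wide frequency band of one sound section.
    spec = np.abs(np.fft.rfft(section)) ** 2
    freqs = np.fft.rfftfreq(len(section), 1.0 / sr)
    edges = np.arange(0.0, sr / 2 + band_hz, band_hz)
    powers = np.array([spec[(freqs >= lo) & (freqs < lo + band_hz)].sum()
                       for lo in edges[:-1]])
    return powers, edges

def common_frequency_bands(prev_section, cur_section, power_threshold=1e-3):
    # A band is "common" when its power is above the threshold in both
    # the previous section and the current section.
    p_prev, edges = band_powers(prev_section)
    p_cur, _ = band_powers(cur_section)
    common = (p_prev > power_threshold) & (p_cur > power_threshold)
    return [(edges[k], edges[k + 1]) for k in np.nonzero(common)[0]]
```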
[00060] When the location of an image object changes in a depth direction of a display device, the power of the sound object that corresponds to the image object changes. In this case, the power of the frequency band that corresponds to the sound object changes, and thus the location of the sound object in the depth direction can be identified by examining the power change in each frequency band.
[00061] The combination unit 230 determines the relationship between the image object and a sound object based on location information about the image object and location information about the sound object. The combining unit 230 determines that the image object matches the sound object when a difference between the coordinates of the image object and the coordinates of the sound object is within a threshold value. On the other hand, the combining unit 230 determines that the image object does not match the sound object when a difference between the image object coordinates and the sound object coordinates is above a threshold value.
[00062] The determination unit 240 determines a sound depth value for the sound object based on the determination by the combination unit 230. For example, for a sound object determined to match an image object, the sound depth value is determined according to a depth value of the image object. For a sound object determined not to match any image object, the sound depth value is determined as a minimum value. When the sound depth value is determined as a minimum value, the perspective provision unit 130 does not provide sound perspective for the sound object.
[00063] When the locations of the image object and the sound object do not match each other, the determination unit 240 may not provide sound perspective for the sound object in predetermined exceptional circumstances.
[00064] For example, when the size of an image object is below a threshold value, the determination unit 240 may not provide sound perspective for the sound object that corresponds to the image object. Since an image object of very small size contributes little to a user's 3D experience, the determination unit 240 may not provide sound perspective for the corresponding sound object.
[00065] Figure 3 is a block diagram of the depth of sound information acquisition unit 120 of Figure 1 according to another embodiment of the present invention.
[00066] The sound depth information acquisition unit 120 according to the current embodiment of the present invention includes a section depth information acquisition unit 310 and a determination unit 320.
[00067] The section depth information acquisition unit 310 acquires depth information for each image section based on the image depth information. An image signal can be divided into several sections. For example, the image signal can be divided into scene units, in which the signal is divided at each scene change, into image frame units, or into GOP units.
[00068] The section depth information acquisition unit 310 acquires an image depth value corresponding to each section. The section depth information acquisition unit 310 can acquire the image depth value corresponding to each section based on Equation 2 below.

Depth_i = (1/N) · Σ_{x,y} I_i^{x,y}    (Equation 2)

[00069] In Equation 2, I_i^{x,y} indicates a depth value of the i-th frame at coordinates (x, y), and N is the number of pixels in the frame. Depth_i is the image depth value corresponding to the i-th frame and is obtained by averaging the depth values of all pixels in the i-th frame.
[00070] Equation 2 is just an example, and the maximum depth value, the minimum depth value, or a depth value of a pixel at which a change from a previous section is remarkably large can be determined as a representative depth value of a section.
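The three candidate representative values can be sketched together; the function below is a hypothetical helper, not part of the patent.

```python
import numpy as np

def representative_depth(section_frames, mode="mean"):
    # Reduce an image section (a sequence of per-frame depth maps) to a
    # single representative depth value, as in Equation 2 and its variants.
    section = np.stack(section_frames)        # (frames, height, width)
    if mode == "mean":                        # Equation 2: pixel average
        return float(section.mean())
    if mode == "max":                         # maximum depth value
        return float(section.max())
    if mode == "min":                         # minimum depth value
        return float(section.min())
    raise ValueError("unknown mode: " + mode)
```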
[00071] The determination unit 320 determines a sound depth value for the sound section that corresponds to an image section based on the representative depth value of each section. The determination unit 320 determines the sound depth value according to a predetermined function to which the representative depth value of each section is input. The determination unit 320 can use, as the predetermined function, a function in which the output value is directly proportional to the input value, or a function in which the output value increases exponentially with the input value. In another embodiment of the present invention, functions that differ from each other according to the range of input values can be used as the predetermined function. Examples of the predetermined function used by the determination unit 320 to determine the sound depth value will be described later with reference to Figure 4.
[00072] When the determination unit 320 determines that sound perspective need not be provided to a sound section, the sound depth value in the corresponding sound section can be determined as a minimum value.
[00073] The determination unit 320 can acquire the difference in average depth values between an i-th image frame and an (i+1)-th image frame, which are adjacent to each other, according to Equation 3 below.

Diff_Depth_i = Depth_{i+1} - Depth_i    (Equation 3)

[00074] Diff_Depth_i indicates the difference between the average image depth value in the i-th frame and the average image depth value in the (i+1)-th frame.
[00075] The determination unit 320 determines whether to provide sound perspective to the sound section that corresponds to the i-th frame according to Equation 4 below.

R_Flag_i = 0, if Diff_Depth_i ≥ th; R_Flag_i = 1, otherwise    (Equation 4)

[00076] In Equation 4, th is a threshold value and R_Flag_i is an indicator of whether to provide sound perspective to the sound section that corresponds to the i-th frame. When R_Flag_i has a value of 0, sound perspective is provided to the corresponding sound section, and when R_Flag_i has a value of 1, sound perspective is not provided to the corresponding sound section.
[00077] When the difference between the average image depth value in a previous frame and the average image depth value in a next frame is large, it can be determined that there is a high possibility that an image object jumping out of a screen exists in the next frame. Consequently, the determination unit 320 can determine that sound perspective is provided to the sound section that corresponds to an image frame only when Diff_Depth_i is above a threshold value.
[00078] The determination unit 320 determines whether to provide sound perspective to the sound section corresponding to the i-th frame according to Equation 5 below.

R_Flag_i = 0, if Depth_i ≥ th; R_Flag_i = 1, otherwise    (Equation 5)

[00079] In Equation 5, th is a threshold value and R_Flag_i is an indicator of whether to provide sound perspective to the sound section that corresponds to the i-th frame. When R_Flag_i has a value of 0, sound perspective is provided to the corresponding sound section, and when R_Flag_i has a value of 1, sound perspective is not provided to the corresponding sound section.
[00080] Even if the difference between the average image depth value in a previous frame and the average image depth value in a next frame is large, when the average image depth value in the next frame is below a threshold value, there is a high possibility that no image object jumping out of the screen exists in the next frame. Consequently, the determination unit 320 can determine that sound perspective is provided to the sound section that corresponds to an image frame only when Depth_i is above a threshold value (e.g., 28 in Figure 4).
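The logic of Equations 3 to 5 can be condensed into one sketch. The patent presents Equations 4 and 5 as separate checks; combining them here is a reading choice, the difference threshold is an assumed value, and 28 is the example threshold from Figure 4.

```python
def r_flag(depth_prev, depth_cur, diff_threshold=5.0, depth_threshold=28.0):
    # Equation 3: change in section-average depth between adjacent frames.
    diff_depth = depth_cur - depth_prev
    # Equation 4: skip perspective unless the depth change is large enough.
    if abs(diff_depth) < diff_threshold:
        return 1          # do not provide sound perspective
    # Equation 5: skip perspective unless the depth itself is large enough.
    if depth_cur < depth_threshold:
        return 1          # do not provide sound perspective
    return 0              # provide sound perspective
```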
[00081] Figure 4 is a graph illustrating a predetermined function used to determine a sound depth value in the determination units 240 and 320 according to an embodiment of the present invention.
[00082] In the predetermined function illustrated in Figure 4, the horizontal axis indicates an image depth value and the vertical axis indicates a sound depth value. The sound depth value can have a value in the range 0 to 255.
[00083] When the image depth value is greater than or equal to 0 and less than 28, the sound depth value is determined as a minimum value. When the sound depth value is set to the minimum value, sound perspective is not provided to a sound object or a sound section.
[00084] When the image depth value is greater than or equal to 28 and less than 124, the degree of change in the sound depth value with respect to the degree of change in the image depth value is constant (i.e., the slope is constant). According to embodiments, the sound depth value may not change linearly with the image depth value and may instead change exponentially or logarithmically.
[00085] In another embodiment, when the image depth value is greater than or equal to 28 and less than 56, a fixed sound depth value (e.g., 58), with which a user can hear natural stereophonic sound, can be determined as the sound depth value.
[00086] When the image depth value is greater than or equal to 124, the sound depth value is determined as a maximum value. According to an embodiment, for convenience of calculation, the maximum sound depth value can be adjusted and used.
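The Figure 4 mapping, taking the linear middle region described above, can be sketched as follows; the exact slope is inferred from the 28 and 124 thresholds and the 0 to 255 output range.

```python
def sound_depth_from_image_depth(image_depth):
    # Piecewise mapping of Figure 4: both depths range over 0 to 255.
    if image_depth < 28:
        return 0                      # minimum: no perspective provided
    if image_depth < 124:
        # Linear region; the text also allows exponential or logarithmic
        # growth, or a fixed value (e.g., 58) over part of this range.
        return round(255 * (image_depth - 28) / (124 - 28))
    return 255                        # maximum sound depth value
```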
[00087] Figure 5 is a block diagram of the perspective provision unit 500 corresponding to the perspective provision unit 130 which provides stereophonic sound using a stereo sound signal according to an embodiment of the present invention.
[00088] When an input signal is a multi-channel sound signal, the present invention can be applied after downmixing the input signal to a stereo signal.
[00089] A fast Fourier transform (FFT) 510 performs a fast Fourier transform on the input signal.
[00090] An inverse fast Fourier transform (IFFT) 520 performs an inverse Fourier transform on the Fourier transformed signal.
[00091] A center signal extractor 530 extracts a center signal, which is the signal that corresponds to a center channel, from a stereo signal. The center signal extractor 530 extracts a signal that has high correlation between the stereo channels as the center channel signal. In Figure 5, it is assumed that sound perspective is provided to the center channel signal. However, the sound perspective can be provided to channel signals other than the center channel signal, such as at least one of the left and right front channel signals and the left and right surround channel signals, to a specific sound object, or to an entire sound signal.
[00092] A sound stage extension unit 550 extends a sound stage. The sound stage extension unit 550 orients the sound stage to the outside of the speakers by artificially providing a time difference or a phase difference to the stereo signal.
[00093] The sound depth information acquisition unit 560 acquires sound depth information based on the image depth information.
[00094] A parameter calculator 570 determines a control parameter value necessary to provide sound perspective to a sound object based on sound depth information.
[00095] A level controller 571 controls the intensity of an input signal.
[00096] A phase controller 572 controls a phase of the input signal.
[00097] A reflection effect provision unit 573 models a reflection signal generated when an input signal is reflected by a wall.
[00098] A near field effect provision unit 574 models a sound signal generated near a user.
[00099] A mixer 580 mixes at least one signal and outputs the mixed signal to a speaker.
[000100] Next, the operation of the perspective provision unit 500 for reproducing stereophonic sound will be described in chronological order.
[000101] First, when a multi-channel sound signal is input, the multi-channel sound signal is converted to a stereo signal through a downmixer (not shown).
[000102] The FFT 510 performs a fast Fourier transform on the stereo signals and then outputs the transformed signals to the center signal extractor 530.
[000103] The center signal extractor 530 compares the transformed stereo signals to each other and outputs a signal that has high correlation as a center channel signal.
[000104] The sound depth information acquisition unit 560 acquires the sound depth information based on the image depth information. The acquisition of sound depth information by the sound depth information acquisition unit 560 is described above with reference to Figures 2 and 3. More specifically, the sound depth information acquisition unit 560 compares the location of a sound object with the location of an image object, thereby acquiring the sound depth information, or uses the depth information of each section of an image signal, thereby acquiring the sound depth information.
[000105] The parameter calculator 570 calculates the parameters to be applied to the modules used to provide sound perspective, based on the sound depth information.
[000106] The phase controller 572 duplicates a center channel signal into two signals and controls the phase of at least one of the two duplicated signals according to the parameters calculated by the parameter calculator 570. When a sound signal having different phases is played through a left speaker and a right speaker, a blurring phenomenon is generated. When the blurring phenomenon intensifies, it is difficult for a user to accurately recognize the location where the sound object is generated. In this regard, when the method of controlling a phase is used in conjunction with another method of providing perspective, the effect of providing perspective can be maximized.
[000107] As the location where the sound object is generated gets closer to the user (or as the location quickly approaches the user), the phase controller 572 sets the phase difference between the duplicated signals to be larger. The duplicated signals, whose phases are controlled, are transmitted to the reflection effect provision unit 573 through the IFFT 520.
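Since the phase controller operates before the IFFT 520, a sketch can rotate the FFT-domain center signal directly; the maximum rotation is an assumed parameter.

```python
import numpy as np

def control_phase(center_spectrum, depth, max_phase=np.pi / 4):
    # Duplicate the FFT-domain center signal and rotate one copy's phase
    # in proportion to the sound depth value (larger depth, larger
    # left/right phase difference).
    left = center_spectrum
    right = center_spectrum * np.exp(1j * max_phase * depth)
    return left, right
```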
[000108] The reflection effect provision unit 573 models a reflection signal. When a sound object is generated at a distance from a user, the direct sound, transmitted directly to the user without being reflected by a wall, is similar to the reflection sound generated by being reflected by a wall, and there is practically no difference in the arrival times of the direct sound and the reflection sound. However, when a sound object is generated close to a user, the intensities of the direct sound and the reflection sound differ from each other, and the difference in their arrival times is large. Consequently, when the sound object is generated close to the user, the reflection effect provision unit 573 markedly reduces the gain value of the reflection signal, increases the delay time, or relatively increases the intensity of the direct sound. The reflection effect provision unit 573 transmits the center channel signal, in which the reflection signal is considered, to the near field effect provision unit 574.
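A time-domain sketch of this reflection model follows; the delay and gain laws are illustrative assumptions.

```python
import numpy as np

def add_reflection(direct, depth, sr=48000):
    # For a near object (large depth) the reflection arrives later and is
    # weaker relative to the direct sound.
    direct = np.asarray(direct, dtype=float)
    delay = min(int(sr * (0.01 + 0.03 * depth)), len(direct))
    gain = 0.6 * (1.0 - depth)
    reflection = np.zeros_like(direct)
    reflection[delay:] = direct[:len(direct) - delay] * gain
    return direct + reflection
```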
[000109] The near field effect provision unit 574 models a sound object generated near the user, based on the parameters calculated by the parameter calculator 570. When a sound object is generated near the user, its low-frequency band component becomes prominent. The near field effect provision unit 574 boosts the low-frequency band component of the center signal when the location where the sound object is generated is close to the user.
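A sketch of the near-field boost follows; the 150 Hz cutoff and the gain law are assumptions.

```python
import numpy as np

def near_field_boost(center, depth, sr=48000, cutoff_hz=150.0):
    # Boost the low-frequency band of the center signal in proportion to
    # the sound depth value of a near sound object.
    center = np.asarray(center, dtype=float)
    spec = np.fft.rfft(center)
    freqs = np.fft.rfftfreq(len(center), 1.0 / sr)
    spec[freqs < cutoff_hz] *= 1.0 + depth
    return np.fft.irfft(spec, n=len(center))
```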
[000110] The sound stage extension unit 550, which receives the input stereo signal, processes the stereo signal so that the sound phase is oriented to the outside of the speakers. When the speaker locations are sufficiently far from each other, a user can realistically hear stereophonic sound.
[000111] The sound stage extension unit 550 converts a stereo signal into a magnified stereo signal. The sound stage extension unit 550 may include a magnification filter, which convolves left/right binaural synthesis with a crosstalk canceller, and a panorama filter, which convolves the magnification filter with a direct left/right filter. Here, the magnification filter renders the stereo signal as a virtual sound source at an arbitrary location, based on a head-related transfer function (HRTF) measured at a predetermined location, and cancels the crosstalk of the virtual sound source based on a filter coefficient to which the HRTF is reflected. The direct left/right filter controls signal characteristics, such as gain and delay, between the original stereo signal and the crosstalk-canceled virtual sound source.
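The HRTF-based magnification and crosstalk-cancellation filters do not fit in a short sketch; the toy stand-in below only hints at the effect by cross-feeding a delayed, attenuated copy of each channel, with all parameters assumed.

```python
import numpy as np

def widen_stereo(left, right, sr=48000, amount=0.5, delay_s=0.0003):
    # Toy crosstalk cancellation: subtract a delayed, attenuated copy of
    # the opposite channel so each ear hears its own channel more
    # exclusively, pushing the perceived stage outside the speakers.
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    d = int(sr * delay_s)
    def delayed(x):
        return np.concatenate([np.zeros(d), x[:len(x) - d]])
    return left - amount * delayed(right), right - amount * delayed(left)
```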
[000112] The level controller 571 controls the power intensity of a sound object based on the sound depth value calculated by the parameter calculator 570. When the sound object is generated close to a user, the level controller 571 can increase the magnitude of the sound object.
[000113] The mixer 580 mixes the stereo signal transmitted from the level controller 571 with the center signal transmitted from the near field effect provision unit 574 to output the mixed signal to a speaker.
[000114] Figures 6A to 6D illustrate the provision of a stereophonic sound in the equipment 100 for reproducing stereophonic sound according to an embodiment of the present invention.
[000115] In Figure 6A, the equipment for reproducing stereophonic sound according to an embodiment of the present invention is not operated.
[000116] A user hears a sound object through at least one speaker. When a user reproduces a mono signal using one speaker, the user may not experience a stereoscopic sensation and when the user reproduces a stereo signal using at least two speakers, the user may experience a stereoscopic sensation.
[000117] In Figure 6B, a sound object having a sound depth value of "0" is played. In Figures 6B to 6D, the sound depth value is assumed to range from "0" to "1". For a sound object represented as being generated closer to the user, the sound depth value increases.
[000118] Since the sound depth value of the sound object is "0", the task of providing perspective to the sound object is not performed. However, because the sound phase is oriented to the outside of the speakers, a user can experience a stereoscopic sensation through the stereo signal. According to embodiments, the technology by which a sound phase is oriented to the outside of the speakers is referred to as "magnification" technology.
[000119] In general, multi-channel sound signals are required to reproduce a stereo signal. Consequently, when a mono signal is input, sound signals corresponding to at least two channels are generated through upmixing.
[000120] In a stereo signal, the sound signal of a first channel is played through a left speaker and the sound signal of a second channel is played through a right speaker. A user can experience a stereoscopic sensation by listening to at least two sound signals generated from different locations.
[000121] However, when the left speaker and the right speaker are close to each other, a user may recognize the sound as being generated at the same location and thus may not experience a stereoscopic sensation. In this case, the sound signal is processed so that the user can recognize the sound as being generated outside the speakers, rather than from the actual speakers.
[000122] In Figure 6C, a sound object that has a sound depth value of "0.3" is played.
[000123] Since the sound depth value of the sound object is greater than "0", perspective corresponding to the sound depth value of "0.3" is provided to the sound object together with the magnification technology. Consequently, the user recognizes the sound object as being generated closer to the user than in Figure 6B.
[000124] For example, it is assumed that a user sees 3D image data in which an image object is represented as jumping out of a screen. In Figure 6C, perspective is provided to the sound object that corresponds to the image object, so that the sound object is processed as if approaching the user. The user visually perceives the image object popping out while the sound object approaches, thereby realistically experiencing a stereoscopic sensation.
[000125] In Figure 6D, a sound object having a sound depth value of "1" is played.
[000126] As the sound depth value of the sound object is greater than 0, the perspective corresponding to the sound depth value of "1" is provided to the sound object in conjunction with the magnification technology. Since the sound depth value of the sound object in Figure 6D is greater than that of the sound object in Figure 6C, a user recognizes that the sound object is generated closer to the user than in Figure 6C.
[000127] Figure 7 is a flowchart illustrating a method of detecting a location of a sound object based on a sound signal according to an embodiment of the present invention.
[000128] In operation S710, the power of each frequency band is calculated for each of a plurality of sections constituting a sound signal.
[000129] In operation S720, a common frequency band is determined based on the power of each frequency band.
[000130] The common frequency band denotes a frequency band whose power in the previous sections and in the current section is above a predetermined threshold value. Here, a frequency band having low power can correspond to a meaningless sound object, such as noise, and thus can be excluded from the common frequency band. For example, after a predetermined number of frequency bands are selected in descending order of power, the common frequency band can be determined from among the selected frequency bands.
[000131] In operation S730, the power of the common frequency band in the previous sections is compared with the power of the common frequency band in the current section, and a sound depth value is determined based on the result of the comparison. When the power of the common frequency band in the current section is greater than the power of the common frequency band in the previous sections, it is determined that the sound object corresponding to the common frequency band is generated closer to the user. When the power of the common frequency band in the previous sections is similar to the power of the common frequency band in the current section, it is determined that the sound object does not approach the user.
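Operation S730 can be sketched per common band as below; the increase ratio and the mapping of the power ratio to a depth value are assumptions.

```python
def sound_depth_from_band_power(power_prev, power_cur, increase_ratio=1.5):
    # A markedly increased band power between adjacent sections suggests
    # the corresponding sound object is approaching the user.
    if power_prev > 0 and power_cur > increase_ratio * power_prev:
        return min(1.0, power_cur / power_prev - 1.0)   # approaching
    return 0.0                                          # not approaching
```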
[000132] Figures 8A to 8D illustrate the detection of a location of a sound object from a sound signal according to an embodiment of the present invention.
[000133] In Figure 8A, a sound signal divided into a plurality of sections is illustrated along a time axis.
[000134] In Figures 8B to 8D, the powers of each frequency band in the first, second, and third sections 801, 802, and 803 are illustrated. In Figures 8B to 8D, the first and second sections 801 and 802 are previous sections and the third section 803 is the current section.
[000135] With reference to Figures 8B and 8C, assuming that the powers of the 3,000 to 4,000 Hz, 4,000 to 5,000 Hz, and 5,000 to 6,000 Hz frequency bands are above a threshold value in the first to third sections, the 3,000 to 4,000 Hz, 4,000 to 5,000 Hz, and 5,000 to 6,000 Hz frequency bands are determined as the common frequency band.
[000136] With reference to Figures 8C and 8D, the powers of the 3,000 to 4,000 Hz and 4,000 to 5,000 Hz frequency bands in the second section 802 are similar to the powers of the 3,000 to 4,000 Hz and 4,000 to 5,000 Hz frequency bands in the third section 803. Consequently, the sound depth value of a sound object that corresponds to the 3,000 to 4,000 Hz and 4,000 to 5,000 Hz frequency bands is determined as "0".
[000137] However, the power of the 5,000 to 6,000 Hz frequency band in the third section 803 is markedly increased compared to the power of the 5,000 to 6,000 Hz frequency band in the second section 802. Consequently, the sound depth value of the sound object that corresponds to the 5,000 to 6,000 Hz frequency band is determined as above "0". According to embodiments, an image depth map can be referred to in order to accurately determine the sound depth value of a sound object.
[000138] For example, suppose the power of the 5,000 to 6,000 Hz frequency band in the third section 803 is markedly increased compared to the power of the 5,000 to 6,000 Hz frequency band in the second section 802. In some cases, the location where the sound object corresponding to the 5,000 to 6,000 Hz frequency band is generated does not approach the user; instead, only the power increases at the same location. Here, when an image object projecting from the screen exists in an image frame that corresponds to the third section 803, with reference to the image depth map, there is a high possibility that the sound object corresponding to the 5,000 to 6,000 Hz frequency band corresponds to that image object. In this case, it may be preferable that the location where the sound object is generated gradually approach the user, and thus the sound depth value of the sound object is set to "0" or greater. When no image object projecting from the screen exists in the image frame corresponding to the third section 803, only the power of the sound object increases at the same location, and thus the sound depth value of the sound object can be set to "0".
[000139] Figure 9 is a flowchart illustrating a method of reproducing stereophonic sound according to an embodiment of the present invention.
[000141] In operation S910, image depth information is acquired. The image depth information indicates a distance between at least one image object or background in a stereoscopic image signal and a reference location.
[000142] In operation S920, sound depth information is acquired. The sound depth information indicates a distance between at least one sound object in a sound signal and a reference location.
[000143] In operation S930, sound perspective is provided to the at least one sound object based on the sound depth information.
[000144] Embodiments of the present invention can be written as computer programs and can be implemented in general-purpose digital computers that execute the programs using a computer-readable recording medium.
[000145] Examples of computer-readable recording media include magnetic storage media (e.g., ROM, floppy disks, hard disks, etc.), optical recording media (e.g., CD-ROMs or DVDs), and storage media such as carrier waves (e.g., transmission over the Internet).
[000146] Although the present invention has been particularly shown and described with reference to its exemplary embodiments, it will be understood by those of ordinary skill in the art that various changes in form and detail can be made therein without departing from the spirit and scope of the present invention as defined by the following claims.
Claims:
Claims (21)
[0001]
1. METHOD OF REPRODUCING STEREOPHONIC SOUND, the method comprising: acquiring (S910) image depth information indicating a distance between at least one object in an image signal and a reference location, the reference location being a position of a user; acquiring (S920) sound depth information indicating a distance between at least one sound object in a sound signal and the reference location by using a representative depth value for each image section that makes up the image signal, the image section being obtained by a frame unit or a scene unit; and characterized by providing (S930) sound perspective to the at least one sound object based on the sound depth information, which is acquired on the basis of the image depth information, by using a virtual sound source at a location based on a head-related transfer function (HRTF) measured at a predetermined location, and by controlling the power intensity of the at least one sound object based on the sound depth information so that the magnitude of a sound object is increased when the sound object is generated close to the user.
[0002]
2. Method according to claim 1, characterized in that the acquisition of the sound depth information comprises: acquiring the representative depth value as a maximum depth value for each image section constituting the image signal; and acquiring a sound depth value for the at least one sound object based on the maximum depth value.
[0003]
Method according to claim 2, characterized in that the acquisition of the sound depth value comprises determining the sound depth value as a minimum value when the maximum depth value is less than a first threshold value and determining the sound depth value as a maximum value when the maximum depth value is equal to or greater than a second threshold value.
[0004]
Method according to claim 3, characterized in that the acquisition of the sound depth value further comprises determining the sound depth value in proportion to the maximum depth value when the maximum depth value is equal to or greater than the first threshold value and less than the second threshold value.
[0005]
5. Method according to claim 1, characterized in that the acquisition of the sound depth information comprises: acquiring location information about at least one image object in the image signal and location information about the at least one sound object in the sound signal; determining whether the location of the at least one image object matches the location of the at least one sound object; and acquiring the sound depth information based on a determination result.
[0006]
6. Method according to claim 1, characterized in that the acquisition of the sound depth information comprises: acquiring the representative depth value as an average depth value for each image section that constitutes the image signal; and acquiring a sound depth value for the at least one sound object based on the average depth value.
[0007]
Method according to claim 6, characterized in that the acquisition of the sound depth value comprises determining the sound depth value as a minimum value when the average depth value is less than a third threshold value.
[0008]
8. Method according to claim 6, characterized in that the acquisition of the sound depth value comprises determining the sound depth value as a minimum value when a difference between the average depth value in a previous section and the average depth value in a current section is less than a fourth threshold value.
[0009]
9. Method according to claim 1, characterized in that the provision of the sound perspective comprises controlling the power of the sound object based on the sound depth information.
[0010]
10. Method according to claim 1, characterized in that providing the sound perspective comprises controlling a gain and a delay time of a reflection signal, generated as the sound object is reflected, based on the sound depth information.
[0011]
A method according to claim 1, characterized in that providing the sound perspective comprises controlling the intensity of a low-frequency range component of the sound object based on the sound depth information.
[0012]
12. Method according to claim 1, characterized in that providing the sound perspective comprises controlling a difference between a phase of the sound object to be emitted through a first speaker and a phase of the sound object to be emitted through a second speaker.
[0013]
13. Method according to claim 1, characterized in that it further comprises outputting the sound object, to which the sound perspective is provided, through at least one of a left surround speaker and a right surround speaker, and a front left speaker and a front right speaker.
[0014]
14. Method according to claim 1, characterized in that it further comprises orienting a sound phase to the outside of the speakers by using the sound signal.
[0015]
15. Method according to claim 1, characterized in that the acquisition of the sound depth information comprises determining a sound depth value for the at least one sound object based on a size of each of the at least one image object.
[0016]
16. Method according to claim 1, characterized in that the acquisition of the sound depth information comprises determining a sound depth value for the at least one sound object based on a distribution of the at least one image object.
[0017]
17. EQUIPMENT FOR REPRODUCING STEREOPHONIC SOUND, the equipment comprising: an image depth information acquisition unit (110) for acquiring image depth information indicating a distance between at least one object in an image signal and a reference location, where the reference location is a position of a user; a sound depth information acquisition unit (120) for acquiring sound depth information indicating a distance between at least one sound object in a sound signal and the reference location by using a representative depth value for each image section that constitutes the image signal, the image section being obtained by a frame unit or a scene unit; and characterized by comprising a perspective providing unit (130) for providing sound perspective to the at least one sound object based on the sound depth information, which is acquired on the basis of the image depth information, by using a virtual sound source at a location based on a head-related transfer function (HRTF) measured at a predetermined location, and by controlling the power intensity of the at least one sound object based on the sound depth information so that the magnitude of a sound object is increased when the sound object is generated close to the user.
[0018]
18. Equipment according to claim 17, characterized in that the sound depth information acquisition unit acquires a maximum depth value for each image section constituting an image signal and a sound depth value for the at least one sound object based on the maximum depth value.
[0019]
Equipment according to claim 18, characterized in that the sound depth information acquisition unit determines the sound depth value as a minimum value when the maximum depth value is less than a first threshold value and determines the sound depth value as a maximum value when the maximum depth value is equal to or greater than a second threshold value.
[0020]
20. Equipment according to claim 18, characterized in that the sound depth value is determined in proportion to the maximum depth value when the maximum depth value is equal to or greater than the first threshold value and less than the second threshold value.
[0021]
21. A COMPUTER-READABLE RECORDING MEDIUM, characterized in that it has incorporated therein instructions for performing the method of any one of claims 1 to 16.
Similar technologies:
Publication number | Publication date | Patent title
BR112012023504B1|2021-07-13|METHOD OF REPRODUCING STEREOPHONIC SOUND, EQUIPMENT TO REPRODUCE STEREOPHONIC SOUND, AND COMPUTER-READABLE RECORDING MEDIA
BR112012028272B1|2021-07-06|method of reproducing a stereophonic sound, stereophonic sound reproduction equipment, and non-transient computer readable recording medium
US9554227B2|2017-01-24|Method and apparatus for processing audio signal
JP5893129B2|2016-03-23|Method and system for generating 3D audio by upmixing audio
JP2015206989A|2015-11-19|Information processing device, information processing method, and program
EP2802161A1|2014-11-12|Method and device for localizing multichannel audio signal
JP2001169309A|2001-06-22|Information recording device and information reproducing device
JP2004032726A|2004-01-29|Information recording device and information reproducing device
WO2020014506A1|2020-01-16|Method for acoustically rendering the size of a sound source
Patent family:
Publication number | Publication date
US9622007B2|2017-04-11|
WO2011115430A3|2011-11-24|
BR112012023504A2|2016-05-31|
EP2549777B1|2016-03-16|
AU2011227869A1|2012-10-11|
CA2793720A1|2011-09-22|
KR101844511B1|2018-05-18|
WO2011115430A2|2011-09-22|
CN105933845B|2019-04-16|
KR20110105715A|2011-09-27|
MX2012010761A|2012-10-15|
US20150358753A1|2015-12-10|
JP2013523006A|2013-06-13|
EP3026935A1|2016-06-01|
US9113280B2|2015-08-18|
EP2549777A2|2013-01-23|
CA2793720C|2016-07-05|
US20130010969A1|2013-01-10|
CN105933845A|2016-09-07|
AU2011227869B2|2015-05-21|
MY165980A|2018-05-18|
RU2012140018A|2014-03-27|
RU2518933C2|2014-06-10|
CN102812731A|2012-12-05|
JP5944840B2|2016-07-05|
EP2549777A4|2014-12-24|
CN102812731B|2016-08-03|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title

GB9107011D0|1991-04-04|1991-05-22|Gerzon Michael A|Illusory sound distance control method|
JPH06105400A|1992-09-17|1994-04-15|Olympus Optical Co Ltd|Three-dimensional space reproduction system|
JPH06269096A|1993-03-15|1994-09-22|Olympus Optical Co Ltd|Sound image controller|
JP3528284B2|1994-11-18|2004-05-17|ヤマハ株式会社|3D sound system|
CN1188586A|1995-04-21|1998-07-22|Bsg实验室股份有限公司|Acoustical audio system for producing three dimensional sound image|
JPH1063470A|1996-06-12|1998-03-06|Nintendo Co Ltd|Souond generating device interlocking with image display|
JP4086336B2|1996-09-18|2008-05-14|富士通株式会社|Attribute information providing apparatus and multimedia system|
US6504934B1|1998-01-23|2003-01-07|Onkyo Corporation|Apparatus and method for localizing sound image|
JPH11220800A|1998-01-30|1999-08-10|Onkyo Corp|Sound image moving method and its device|
JP2000267675A|1999-03-16|2000-09-29|Sega Enterp Ltd|Acoustical signal processor|
KR19990068477A|1999-05-25|1999-09-06|김휘진|3-dimensional sound processing system and processing method thereof|
RU2145778C1|1999-06-11|2000-02-20|Розенштейн Аркадий Зильманович|Image-forming and sound accompaniment system for information and entertainment scenic space|
WO2001080564A1|2000-04-13|2001-10-25|Qvc, Inc.|System and method for digital broadcast audio content targeting|
US6961458B2|2001-04-27|2005-11-01|International Business Machines Corporation|Method and apparatus for presenting 3-dimensional objects to visually impaired users|
US6829018B2|2001-09-17|2004-12-07|Koninklijke Philips Electronics N.V.|Three-dimensional sound creation assisted by visual information|
RU23032U1|2002-01-04|2002-05-10|Grebelskiy Mikhail Dmitrievich|Audio transmission system|
RU2232481C1|2003-03-31|2004-07-10|Volkov Boris Ivanovich|Digital TV set|
US7818077B2|2004-05-06|2010-10-19|Valve Corporation|Encoding spatial data in a multi-channel sound file for an object in a virtual environment|
KR100677119B1|2004-06-04|2007-02-02|Samsung Electronics Co., Ltd.|Apparatus and method for reproducing wide stereo sound|
JP2008512898A|2004-09-03|2008-04-24|Parker Tsuhako|Method and apparatus for generating pseudo three-dimensional acoustic space by recorded sound|
JP2006128816A|2004-10-26|2006-05-18|Victor Co Of Japan Ltd|Recording program and reproducing program corresponding to stereoscopic video and stereoscopic audio, recording apparatus and reproducing apparatus, and recording medium|
KR100688198B1|2005-02-01|2007-03-02|LG Electronics Inc.|Terminal for playing 3D sound and method for the same|
KR100619082B1|2005-07-20|2006-09-05|Samsung Electronics Co., Ltd.|Method and apparatus for reproducing wide mono sound|
EP1784020A1|2005-11-08|2007-05-09|TCL & Alcatel Mobile Phones Limited|Method and communication apparatus for reproducing a moving picture, and use in a videoconference system|
KR100922585B1|2007-09-21|2009-10-21|Electronics and Telecommunications Research Institute|System and method for the 3D audio implementation of real time e-learning service|
KR100934928B1|2008-03-20|2010-01-06|Park Seung-min|Display apparatus having sound effect of three dimensional coordinates corresponding to the object location in a scene|
JP5174527B2|2008-05-14|2013-04-03|Japan Broadcasting Corporation|Acoustic signal multiplex transmission system, production apparatus and reproduction apparatus to which sound image localization acoustic meta information is added|
CN101593541B|2008-05-28|2012-01-04|Huawei Device Co., Ltd.|Method and media player for synchronously playing images and audio file|
CN101350931B|2008-08-27|2011-09-14|Huawei Device Co., Ltd.|Method and device for generating and playing audio signal as well as processing system thereof|
JP6105400B2|2013-06-14|2017-03-29|FANUC Corporation|Cable wiring device and posture holding member of injection molding machine|
Cited by:
Publication number | Application date | Publication date | Applicant | Patent title
KR101717787B1|2010-04-29|2017-03-17|LG Electronics Inc.|Display device and method for outputting of audio signal|
US8665321B2|2010-06-08|2014-03-04|LG Electronics Inc.|Image display apparatus and method for operating the same|
US9100633B2|2010-11-18|2015-08-04|LG Electronics Inc.|Electronic device generating stereo sound synchronized with stereographic moving picture|
JP2012119738A|2010-11-29|2012-06-21|Sony Corp|Information processing apparatus, information processing method and program|
JP5776223B2|2011-03-02|2015-09-09|Sony Corporation|Sound image control device and sound image control method|
KR101901908B1|2011-07-29|2018-11-05|Samsung Electronics Co., Ltd.|Method for processing audio signal and apparatus for processing audio signal thereof|
WO2013184215A2|2012-03-22|2013-12-12|The University Of North Carolina At Chapel Hill|Methods, systems, and computer readable media for simulating sound propagation in large scenes using equivalent sources|
EP2871842A4|2012-07-09|2016-06-29|LG Electronics Inc.|Enhanced 3D audio/video processing apparatus and method|
TW201412092A|2012-09-05|2014-03-16|Acer Inc|Multimedia processing system and audio signal processing method|
CN103686136A|2012-09-18|2014-03-26|Acer Inc.|Multimedia processing system and audio signal processing method|
JP6243595B2|2012-10-23|2017-12-06|Nintendo Co., Ltd.|Information processing system, information processing program, information processing control method, and information processing apparatus|
JP6055651B2|2012-10-29|2016-12-27|Nintendo Co., Ltd.|Information processing system, information processing program, information processing control method, and information processing apparatus|
KR20210141766A|2013-07-31|2021-11-23|Dolby Laboratories Licensing Corporation|Processing spatially diffuse or large audio objects|
US10469969B2|2013-09-17|2019-11-05|Wilus Institute Of Standards And Technology Inc.|Method and apparatus for processing multimedia signals|
EP3062535B1|2013-10-22|2019-07-03|Industry-Academic Cooperation Foundation, Yonsei University|Method and apparatus for processing audio signal|
KR20210094125A|2013-12-23|2021-07-28|Wilus Institute of Standards and Technology Inc.|Method for generating filter for audio signal, and parameterization device for same|
EP3122073A4|2014-03-19|2017-10-18|Wilus Institute of Standards and Technology Inc.|Audio signal processing method and apparatus|
KR101856540B1|2014-04-02|2018-05-11|Wilus Institute of Standards and Technology Inc.|Audio signal processing method and device|
US10679407B2|2014-06-27|2020-06-09|The University Of North Carolina At Chapel Hill|Methods, systems, and computer readable media for modeling interactive diffuse reflections and higher-order diffraction in virtual environment scenes|
US9977644B2|2014-07-29|2018-05-22|The University Of North Carolina At Chapel Hill|Methods, systems, and computer readable media for conducting interactive sound propagation and rendering for a plurality of sound sources in a virtual environment scene|
US10187737B2|2015-01-16|2019-01-22|Samsung Electronics Co., Ltd.|Method for processing sound on basis of image information, and corresponding device|
KR102342081B1|2015-04-22|2021-12-23|Samsung Display Co., Ltd.|Multimedia device and method for driving the same|
CN106303897A|2015-06-01|2017-01-04|Dolby Laboratories Licensing Corporation|Processing object-based audio signals|
EP3345410B1|2015-09-04|2019-05-22|Koninklijke Philips N.V.|Method and apparatus for processing an audio signal associated with a video image|
CN106060726A|2016-06-07|2016-10-26|Whaley Technology Co., Ltd.|Panoramic sound amplification system and panoramic sound amplification method|
US10785445B2|2016-12-05|2020-09-22|Hewlett-Packard Development Company, L.P.|Audiovisual transmissions adjustments via omnidirectional cameras|
CN108347688A|2017-01-25|2018-07-31|MStar Semiconductor, Inc.|Sound processing method and audio-visual processing apparatus providing a stereophonic effect according to monaural audio data|
US10248744B2|2017-02-16|2019-04-02|The University Of North Carolina At Chapel Hill|Methods, systems, and computer readable media for acoustic classification and optimization for multi-modal rendering of real-world scenes|
CN107613383A|2017-09-11|2018-01-19|Guangdong OPPO Mobile Telecommunications Corp., Ltd.|Video volume adjusting method, device and electronic device|
CN107734385B|2017-09-11|2021-01-12|Guangdong OPPO Mobile Telecommunications Corp., Ltd.|Video playing method and device and electronic device|
EP3713255A4|2017-11-14|2021-01-20|Sony Corporation|Signal processing device and method, and program|
CN108156499A|2017-12-28|2018-06-12|Wuhan China Star Optoelectronics Semiconductor Display Technology Co., Ltd.|Voice and image acquisition and coding method and device|
CN109327794B|2018-11-01|2020-09-29|Guangdong OPPO Mobile Telecommunications Corp., Ltd.|3D sound effect processing method and related product|
CN110572760B|2019-09-05|2021-04-02|Guangdong OPPO Mobile Telecommunications Corp., Ltd.|Electronic device and control method thereof|
Legal status:
2019-01-08| B06F| Objections, documents and/or translations needed after an examination request according [chapter 6.6 patent gazette]|
2019-11-05| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2021-01-05| B06A| Notification to applicant to reply to the report for non-patentability or inadequacy of the application [chapter 6.1 patent gazette]|
2021-05-04| B09A| Decision: intention to grant [chapter 9.1 patent gazette]|
2021-07-13| B16A| Patent or certificate of addition of invention granted|Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 17/03/2011, SUBJECT TO THE LEGAL CONDITIONS. PATENT GRANTED IN ACCORDANCE WITH ADI 5.529/DF, WHICH DETERMINES THE CHANGE OF THE GRANT TERM. |
Priority:
Application number | Application date | Patent title
US31551110P|2010-03-19|
US61/315.511|2010-03-19|
KR1020110022886A|KR101844511B1|2010-03-19|2011-03-15|Method and apparatus for reproducing stereophonic sound|
KR10/2011-0022886|2011-03-15|
PCT/KR2011/001849|WO2011115430A2|2010-03-19|2011-03-17|Method and apparatus for reproducing three-dimensional sound|